Bi-modal emotion recognition from expressive face and body gestures
Authors
Hatice Gunes, Massimo Piccardi
Abstract
Psychological research findings suggest that humans rely on the combined visual channels of face and body more than any other channel when they make judgments about human communicative behavior. However, most existing systems attempting to analyze human nonverbal behavior are mono-modal and focus only on the face. Research that aims to integrate gestures as an expressive means has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence, single expressive frames from both the face and the body are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the individual facial or bodily modality alone.
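To make the two fusion strategies mentioned in the abstract concrete, the sketch below illustrates feature-level fusion (concatenating face and body feature vectors before training a single classifier) and decision-level fusion (training one classifier per modality and averaging their class probabilities). This is a minimal illustration assuming scikit-learn-style classifiers and randomly generated placeholder features; the feature extraction, classifier choice, and fusion rules used in the paper itself are not reproduced here.

```python
# Minimal sketch of feature-level vs. decision-level fusion for two modalities
# (face and body). Feature vectors are assumed to be pre-extracted per selected
# expressive frame; all data below is synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder data: 200 samples, 50-dim face features, 30-dim body features,
# and labels for six emotion classes.
X_face = rng.normal(size=(200, 50))
X_body = rng.normal(size=(200, 30))
y = rng.integers(0, 6, size=200)

train, test = slice(0, 150), slice(150, 200)

# Feature-level fusion: concatenate modalities, train one classifier.
X_fused = np.hstack([X_face, X_body])
feat_clf = RandomForestClassifier(random_state=0).fit(X_fused[train], y[train])
feat_pred = feat_clf.predict(X_fused[test])

# Decision-level fusion: train one classifier per modality, then combine
# their class-probability outputs (here by simple averaging).
face_clf = RandomForestClassifier(random_state=0).fit(X_face[train], y[train])
body_clf = RandomForestClassifier(random_state=0).fit(X_body[train], y[train])
avg_proba = (face_clf.predict_proba(X_face[test]) +
             body_clf.predict_proba(X_body[test])) / 2.0
dec_pred = face_clf.classes_[np.argmax(avg_proba, axis=1)]

print("feature-level accuracy:", np.mean(feat_pred == y[test]))
print("decision-level accuracy:", np.mean(dec_pred == y[test]))
```

Averaging posteriors is only one of several possible decision-level combination rules (product, max, weighted sum, or a second-stage classifier are common alternatives); the paper's own experiments compare such fused classifiers against the single-modality baselines.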
Related articles
Survey on Emotional Body Gesture Recognition
Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as "body language" and ...
The face of emotions: a logical formalization of expressive speech acts
In this paper, we merge speech act theory, emotion theory, and logic. We propose a modal logic that integrates the concepts of belief, goal, ideal and responsibility and that allows us to describe what a given agent expresses in the context of a conversation with another agent. We use the logic in order to provide a systematic analysis of expressive speech acts, that is, speech acts that are aimed...
Semantic Audio-Visual Data Fusion for Automatic Emotion Recognition
The paper describes a novel technique for the recognition of emotions from multimodal data. We focus on the recognition of the six prototypic emotions. The results from the facial expression recognition and from the emotion recognition from speech are combined using a bi-modal multimodal semantic data fusion model that determines the most probable emotion of the subject. Two types of models bas...
Hand Gesture Recognition From Video
Gestures are expressive and meaningful body motions. Gesture recognition provides meaningful expressions of motion by a human involving the hands, arms, face, head, and body. Human motion capture is used both when the subject is viewed as a single object and when it exhibits articulated motion with a number of joints. For example, in speech and handwriting, gestures vary between individuals, and even...
Developmental perspectives on the expression of motion in speech and gesture: A comparison of French and English
Recent research shows that adult speakers of verbvs. satellite-framed languages (Talmy, 2000) express motion events in language-specific ways in speech (Slobin 1996, 2004) and co-verbal gestures (Duncan 2005; Kita & Özyurek 2003; McNeill 1992). Although such findings suggest cross-linguistic differences in the expression of events, little is still known about their implications for first langua...
Journal: J. Network and Computer Applications
Volume 30, Issue -
Pages -
Publication date: 2007